Late-life depression (LLD) is a highly prevalent mood disorder in older adults and is frequently accompanied by cognitive impairment (CI). Studies have shown that LLD may increase the risk of Alzheimer's disease (AD). However, the heterogeneous presentation of geriatric depression suggests that multiple biological mechanisms may underlie it. Current biological research on LLD progression incorporates machine learning that combines neuroimaging data with clinical observations. Few studies have examined incident cognitive diagnostic outcomes in LLD based on structural MRI (sMRI). In this paper, we describe the development of a hybrid representation learning (HRL) framework for predicting cognitive diagnosis over five years from T1-weighted sMRI data. Specifically, we first extract prediction-oriented MRI features via a deep neural network, and then integrate them with handcrafted MRI features via a Transformer encoder for cognitive diagnosis prediction. Two tasks are investigated in this work: (1) distinguishing cognitively normal subjects with LLD from never-depressed, cognitively healthy older adults, and (2) distinguishing LLD subjects who developed CI (or even AD) from those who remained cognitively normal over five years. To the best of our knowledge, this is among the first attempts to study the complex heterogeneous progression of LLD based on task-oriented and handcrafted MRI features. We validate the proposed HRL on 294 subjects with T1-weighted MRIs from two clinically harmonized studies. Experimental results suggest that HRL outperforms several classical machine learning and state-of-the-art deep learning methods in LLD identification and prediction tasks.
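The fusion step described above, in which learned deep features and handcrafted features are combined as a token sequence through a Transformer encoder, can be caricatured with a single self-attention layer. The sketch below uses numpy with randomly initialized weights and made-up dimensions; it illustrates only the shape of the idea, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(deep_feats, handcrafted_feats, d_model=16, seed=0):
    """Stack deep and handcrafted MRI features as one token sequence and mix
    them with a single randomly initialized self-attention layer (toy fusion)."""
    rng = np.random.default_rng(seed)
    tokens = np.vstack([deep_feats, handcrafted_feats])        # (n_tokens, d_model)
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model))                 # tokens attend to each other
    fused = attn @ V                                           # (n_tokens, d_model)
    return fused.mean(axis=0)                                  # pooled vector for a classifier head
```

In a real model the attention weights would of course be trained end-to-end with the prediction loss; the point here is only that both feature families enter as tokens and attend to one another.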
Deep neural networks (DNNs) are vulnerable to backdoor attacks during training. A model compromised in this way functions normally, but produces a predefined target label when a certain pattern in the input triggers it. Existing defenses typically rely on the universal backdoor setting, in which poisoned samples share the same uniform trigger. However, recent advanced backdoor attacks show that this assumption no longer holds for dynamic backdoors, where the trigger varies from input to input, defeating existing defenses. In this work, we propose a novel technique, BEATRIX (backdoor detection via Gram matrices). BEATRIX utilizes Gram matrices to capture not only feature correlations but also appropriate high-order information of the representations. By learning class-conditional statistics from the activation patterns of normal samples, BEATRIX identifies poisoned samples by capturing anomalies in activation patterns. To further improve performance in identifying target labels, BEATRIX leverages a kernel-based test without making any prior assumptions about the representation distribution. We demonstrate the effectiveness of our method through extensive evaluation and comparison with state-of-the-art defense techniques. Experimental results show that our approach achieves an F1 score of 91.1% in detecting dynamic backdoors, whereas the state of the art can only reach 36.9%.
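The core mechanism, Gram matrices of activations combined with class-conditional robust statistics, can be sketched in a few lines. Everything below (outer-product Gram features, a median/MAD deviation score) is an illustrative simplification of my own, not the paper's actual implementation.

```python
import numpy as np

def gram_vector(feat):
    """Flattened upper triangle of the Gram (outer-product) matrix of one
    sample's feature vector -- a crude stand-in for the paper's Gram features."""
    g = np.outer(feat, feat)
    iu = np.triu_indices(len(feat))
    return g[iu]

def fit_class_stats(clean_feats):
    """Class-conditional median and MAD of Gram features from clean samples."""
    grams = np.array([gram_vector(f) for f in clean_feats])
    med = np.median(grams, axis=0)
    mad = np.median(np.abs(grams - med), axis=0) + 1e-8
    return med, mad

def anomaly_score(feat, med, mad):
    """Maximum robust deviation of a sample's Gram features from its class stats;
    large values flag likely poisoned samples."""
    return np.max(np.abs(gram_vector(feat) - med) / mad)
```

Because the statistics are per class and robust (median/MAD rather than mean/std), a trigger that shifts even a subset of activation correlations stands out, regardless of whether the trigger pattern itself is uniform or input-dependent.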
With the development of machine learning techniques, research attention has shifted from single-modal to multimodal learning, since real-world data exist in different modalities. However, multimodal models often carry more information than single-modal models, and they are often applied in sensitive scenarios such as medical report generation or disease identification. In contrast to existing membership inference attacks against machine learning classifiers, we focus on the setting in which the input and output of a multimodal model are of different modalities, such as image captioning. This work studies the privacy leakage of multimodal models through the lens of membership inference attacks, i.e., the process of determining whether a data record was involved in the model's training. To achieve this, we propose Multimodal Models Membership Inference (M^4I) with two attack methods: metric-based (MB) M^4I and feature-based (FB) M^4I. More specifically, MB M^4I adopts similarity metrics to infer the membership status of target data. FB M^4I uses a pre-trained shadow multimodal feature extractor to carry out the data inference attack by comparing the similarity of extracted input and output features. Extensive experimental results show that both attack methods achieve strong performance; in unrestricted scenarios, average attack success rates of 72.5% and 94.83%, respectively, can be obtained. Moreover, we evaluate multiple defense mechanisms against our attacks. The source code of the M^4I attacks is publicly available at https://github.com/multimodalmi/multimodal-membership-inference.git.
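A metric-based attack in the spirit of MB M^4I can be illustrated with a toy similarity threshold on captions: if the model's output is suspiciously close to the ground-truth caption, the record was likely seen in training. The token-overlap F1 metric and the threshold below are stand-ins of my own choosing, not the metrics used in the paper.

```python
def token_f1(prediction, reference):
    """Crude caption similarity: F1 over shared unique tokens (a stand-in for
    the BLEU/ROUGE-style metrics a metric-based attack could use)."""
    p, r = set(prediction.lower().split()), set(reference.lower().split())
    common = len(p & r)
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(r)
    return 2 * prec * rec / (prec + rec)

def infer_membership(prediction, reference, threshold=0.6):
    """Flag a record as a training member when the captioning model's output
    is closer to the ground truth than the chosen threshold."""
    return token_f1(prediction, reference) >= threshold
```

The intuition is simple overfitting leakage: models reproduce training captions more faithfully than captions for unseen images, so a similarity gap separates members from non-members.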
In this note, we present how to use the volatility index (VIX) in quantitative strategies to improve the Sharpe ratio and reduce trading risk. The signal in this process is an indicator used for daily trading. Finally, we analyze this process on the SH510300 and SH510050 assets. The strategies are evaluated by measuring the Sharpe ratio, maximum drawdown, and Calmar ratio. However, there is always a risk of trading losses; the test results are only an example of how the method works, and no claim is made as a recommendation on real market positions.
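The three evaluation metrics named above are standard and easy to compute from a daily return series. A minimal numpy sketch, assuming 252 trading days per year and zero risk-free rate by default:

```python
import numpy as np

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualized Sharpe ratio from daily returns."""
    excess = np.asarray(daily_returns, dtype=float) - risk_free / periods
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def max_drawdown(daily_returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1.0 + np.asarray(daily_returns, dtype=float))
    peaks = np.maximum.accumulate(equity)
    return np.max(1.0 - equity / peaks)

def calmar_ratio(daily_returns, periods=252):
    """Annualized return divided by maximum drawdown (inf if no drawdown)."""
    r = np.asarray(daily_returns, dtype=float)
    ann_return = np.prod(1.0 + r) ** (periods / len(r)) - 1.0
    mdd = max_drawdown(r)
    return float("inf") if mdd == 0 else ann_return / mdd
```

A strategy that filters trades by a VIX-based signal would simply produce a different `daily_returns` series, and these three numbers let the filtered and unfiltered variants be compared directly.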
Video classification systems are vulnerable to adversarial attacks, which can create severe security problems in video verification. Current black-box attacks need a large number of queries to succeed, resulting in high computational overhead in the process of attack. On the other hand, attacks with restricted perturbations are ineffective against defenses such as denoising or adversarial training. In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system. StyleFool first utilizes color theme proximity to select the best style image, which helps avoid unnatural details in the stylized videos. Meanwhile, the target class confidence is additionally considered in targeted attacks to influence the output distribution of the classifier by moving the stylized video closer to or even across the decision boundary. A gradient-free method is then employed to further optimize the adversarial perturbations. We carry out extensive experiments to evaluate StyleFool on two standard datasets, UCF-101 and HMDB-51. The experimental results demonstrate that StyleFool outperforms the state-of-the-art adversarial attacks in terms of both the number of queries and the robustness against existing defenses. Moreover, 50% of the stylized videos in untargeted attacks do not need any query since they can already fool the video classification model. Furthermore, we evaluate the indistinguishability through a user study to show that the adversarial samples of StyleFool look imperceptible to human eyes, despite unrestricted perturbations.
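The color-theme-proximity step, picking the style image whose palette best matches the video so the stylized frames avoid unnatural details, can be approximated by comparing dominant quantized colors. The quantization scheme and theme summary below are illustrative choices, not StyleFool's actual procedure.

```python
import numpy as np

def color_theme(frames, k=3):
    """Crude 'color theme': mean of the k most frequent quantized RGB colors
    across the given frames (a stand-in for proper palette extraction)."""
    pixels = np.asarray(frames).reshape(-1, 3) // 64      # 4 bins per channel
    codes, counts = np.unique(pixels, axis=0, return_counts=True)
    top = codes[np.argsort(counts)[::-1][:k]]
    return top.astype(float).mean(axis=0)

def closest_style(video_frames, style_images):
    """Index of the style image whose color theme is nearest the video's."""
    target = color_theme(video_frames)
    dists = [np.linalg.norm(color_theme(s) - target) for s in style_images]
    return int(np.argmin(dists))
```

The design intent is that a style already close in palette perturbs the video less perceptually, so the stylized result stays natural-looking while still shifting the classifier's output distribution.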
Existing integrity verification approaches for deep models are designed for private verification (i.e., assuming the service provider is honest, with white-box access to model parameters). However, private verification approaches do not allow model users to verify the model at run-time. Instead, they must trust the service provider, who may tamper with the verification results. In contrast, a public verification approach that considers the possibility of dishonest service providers can benefit a wider range of users. In this paper, we propose PublicCheck, a practical public integrity verification solution for services of run-time deep models. PublicCheck considers dishonest service providers, and overcomes public verification challenges of being lightweight, providing anti-counterfeiting protection, and having fingerprinting samples that appear smooth. To capture and fingerprint the inherent prediction behaviors of a run-time model, PublicCheck generates smoothly transformed and augmented encysted samples that are enclosed around the model's decision boundary while ensuring that the verification queries are indistinguishable from normal queries. PublicCheck is also applicable when knowledge of the target model is limited (e.g., with no knowledge of gradients or model parameters). A thorough evaluation of PublicCheck demonstrates the strong capability for model integrity breach detection (100% detection accuracy with less than 10 black-box API queries) against various model integrity attacks and model compression attacks. PublicCheck also demonstrates the smooth appearance, feasibility, and efficiency of generating a plethora of encysted samples for fingerprinting.
Federated learning (FL) trains a global model across many decentralized users, each holding a local dataset. Compared with traditional centralized learning, FL does not require direct access to local datasets and thereby aims to mitigate data privacy concerns. However, data privacy leakage in FL still exists due to inference attacks, including membership inference, property inference, and data inversion. In this work, we propose a new type of privacy inference attack, coined the Preference Profiling Attack (PPA), which accurately profiles the private preferences of a local user, e.g., the most favored (disliked) items in a client's online shopping and the most common expressions in a user's selfies. In general, PPA can profile the top-k (in particular, k = 1, 2, 3) preferences of a local client (user). Our key insight is that the gradient variation of a local user's model has a distinguishable sensitivity to the sample proportion of a given class, especially the majority (minority) class. By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of that class in the user's local dataset and thus expose the user's preference for the class. The inherent statistical heterogeneity of FL further facilitates PPA. We extensively evaluate PPA's effectiveness using four datasets (MNIST, CIFAR10, RAF-DB, and Products-10K). Our results show that PPA achieves top-1 attack accuracies of 90% and 98% on MNIST and CIFAR10, respectively. More importantly, in real-world commercial shopping (i.e., Products-10K) and social-network (i.e., RAF-DB) scenarios, PPA achieves a top-1 attack accuracy of 78% in the former case to infer the most ordered items (i.e., as a commercial competitor), and 88% in the latter case to infer a victim user's most common facial expression, e.g., disgust.
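The key insight, gradient sensitivity to class proportion, can be demonstrated on a toy linear softmax model: with uninformative weights, the gradient norm of a class's output weights grows with that class's share of the local data. This is only an illustration of the signal PPA exploits; in the real attack the adversary observes model updates and does not see the user's labels.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def per_class_gradient_norms(X, y, W, n_classes):
    """L2 norm of the softmax cross-entropy gradient w.r.t. each class's
    output weights -- a toy proxy for the sensitivity PPA observes."""
    probs = softmax(X @ W)                       # (n, c) predicted probabilities
    onehot = np.eye(n_classes)[y]
    grad = X.T @ (probs - onehot) / len(X)       # (d, c) weight gradient
    return np.linalg.norm(grad, axis=0)

def profile_top_k(X, y, W, n_classes, k=1):
    """Rank classes by gradient sensitivity to guess the user's preference."""
    norms = per_class_gradient_norms(X, y, W, n_classes)
    return list(np.argsort(norms)[::-1][:k])
```

With an uninformative model every class gets roughly uniform probability, so the residual `(probs - onehot)` is largest in magnitude for the classes that dominate the local data, and their gradient columns have the largest norms.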
Many open-source and commercial malware detectors are available. However, the efficacy of these tools is threatened by new adversarial attacks, whereby malware attempts to evade detection using, for example, machine learning techniques. In this work, we design an adversarial evasion attack that relies on both feature-space and problem-space manipulation. It uses scalability-oriented feature selection to maximize evasion by identifying the most critical features that affect detection. We then use this attack as a benchmark to evaluate several state-of-the-art malware detectors. We find that (i) state-of-the-art malware detectors are vulnerable to simple evasion strategies and can be easily fooled with off-the-shelf techniques; (ii) feature-space manipulation and problem-space obfuscation can be combined to achieve evasion without requiring white-box understanding of the detector; and (iii) we can use explainability approaches (e.g., SHAP) to guide the feature manipulation and explain how the attack transfers across multiple detectors. Our findings shed light on the weaknesses of current malware detectors, as well as how they can be improved.
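Feature-space manipulation guided by per-feature importance can be sketched against a linear detector: zero out the most incriminating features the attacker can realistically modify until the score drops below the decision threshold. The greedy ranking below uses raw weight contributions as a crude stand-in for SHAP values, and the "modifiable" mask stands in for the problem-space constraint that some features cannot be changed without breaking the malware's functionality.

```python
import numpy as np

def evade_linear_detector(x, weights, bias, modifiable, budget=3):
    """Greedily remove the features that push a linear detector's score most
    strongly toward 'malware', among those the attacker may change."""
    x = x.astype(float).copy()
    changed = 0
    # rank present features by their contribution to the malware score
    for idx in np.argsort(weights * x)[::-1]:
        if changed == budget or weights @ x + bias < 0:
            break
        if modifiable[idx] and x[idx] > 0 and weights[idx] > 0:
            x[idx] = 0.0          # problem-space analogue: remove/obfuscate the feature
            changed += 1
    return x, bool(weights @ x + bias < 0)
```

A real detector is of course nonlinear, which is exactly where model-agnostic importance scores such as SHAP replace the `weights * x` ranking used here.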
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack a model's decisions. We expose the existence of an intriguing class of bounded adversarial examples -- universal naturalistic adversarial patches, which we call TnTs -- by exploring the bounded adversarial example space and the natural input space within generative adversarial networks. Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, and highly effective, achieving high attack success rates and universality. A TnT is universal because any input image captured with a TnT in the scene will: i) mislead the network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker now has the potential to exert a greater level of control -- the ability to choose an independent, natural-looking patch, compared with triggers restricted to noisy perturbations -- which until now was only possible with Trojan attack methods that interfere with the model-building process to embed backdoors at the risk of discovery, while still realizing a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, evaluated over its entire validation set of 50,000 images, we demonstrate the realistic threat posed by TnTs and the robustness of the attack. We show that the attack generalizes to create patches that achieve higher attack success rates than existing state-of-the-art methods. Our results show the attack's generalizability across different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks, such as WideResNet50, Inception-V3, and VGG-16.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
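The category-oriented triplet loss mentioned above can be sketched as: pull each feature toward its own category center and push it at least a margin farther away from the nearest other-category center. A toy numpy version follows; the actual loss formulation and center-update rule in the paper may differ.

```python
import numpy as np

def category_triplet_loss(centers, feats, labels, margin=1.0):
    """Toy category-oriented triplet loss over per-pixel features:
    hinge on (distance to own center) - (distance to nearest other center)."""
    loss = 0.0
    for f, y in zip(feats, labels):
        d_pos = np.linalg.norm(f - centers[y])
        d_neg = min(np.linalg.norm(f - c)
                    for j, c in enumerate(centers) if j != y)
        loss += max(0.0, d_pos - d_neg + margin)
    return loss / len(feats)
```

Features already well clustered around their own centers incur zero loss, so the regularizer only acts on ambiguous features near foreign category centers, which is where domain shift typically blurs the segmentation boundaries.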